An Aligned Constraint Programming Model For Serial Batch Scheduling With Minimum Batch Size

Huertas, Jorge A., Van Hentenryck, Pascal

arXiv.org Artificial Intelligence

Abstract--In serial batch (s-batch) scheduling, jobs from similar families are grouped into batches and processed sequentially to avoid the repetitive setups that are required when processing consecutive jobs of different families. Despite the wide success of batching in scheduling, only three Constraint Programming (CP) models have been proposed for this problem considering minimum batch sizes, which is a common requirement in many practical settings, including the ion implantation area in semiconductor manufacturing. These existing CP models rely on a predefined virtual set of possible batches that suffers from the curse of dimensionality and adds complexity to the problem. This paper proposes a novel CP model that does not rely on this virtual set. Instead, it uses key alignment parameters that allow it to reason directly on the sequences of same-family jobs scheduled on the machines, resulting in a more compact formulation. This new model is further improved by exploiting the problem's structure with tailored search phases and strengthened inference levels of the constraint propagators. Extensive computational experiments on nearly five thousand instances compare the proposed models against existing methods in the literature, including mixed-integer programming formulations, tabu search meta-heuristics, and CP approaches. The results demonstrate the superiority of the proposed models on small-to-medium instances with up to 100 jobs, and their ability to find solutions up to 25% better than the ones produced by existing methods on large-scale instances with up to 500 jobs, 10 families, and 10 machines. In the current and highly competitive landscape of the manufacturing industry, companies are under growing pressure to minimize production costs and reduce cycle times. A crucial approach to achieving these goals and boosting production efficiency is processing multiple similar jobs in groups called batches [1]. 
Two types of batching can be distinguished in the scheduling literature, depending on how the jobs are processed inside their batch: (i) parallel batching (p-batch), where jobs inside a batch are processed in parallel at the same time [2]; and (ii) serial batching (s-batch), where jobs inside a batch are processed sequentially, one after the other [3]. The benefits of p-batching in the manufacturing industry are straightforward due to the parallelized processing of the jobs inside a batch. In contrast, the benefits of s-batching usually come from grouping jobs that require similar machine configurations to avoid repetitive setups [4].
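The setup-avoidance idea behind s-batching can be illustrated with a minimal single-machine sketch. This is not the paper's CP model: it simply groups every job of a family into one serial batch (enforcing the minimum batch size as a hard feasibility check) and pays one sequence setup whenever the family changes. The job data, setup time, and minimum batch size below are hypothetical.

```python
from collections import defaultdict

def schedule_sbatch(jobs, setup_time, min_batch_size):
    """jobs: list of (family, processing_time) pairs.

    Greedy single-machine sketch: all jobs of a family form one serial
    batch; one setup is paid before each batch after the first.
    Returns (batches_by_family, makespan).
    """
    batches = defaultdict(list)
    for family, proc in jobs:
        batches[family].append(proc)
    # Minimum batch size is a hard requirement, as in the paper's setting.
    for family, procs in batches.items():
        if len(procs) < min_batch_size:
            raise ValueError(f"family {family} cannot reach batch size {min_batch_size}")
    # Jobs inside a batch run sequentially; setups only between batches.
    makespan = sum(sum(procs) for procs in batches.values())
    makespan += setup_time * (len(batches) - 1)
    return dict(batches), makespan

# Hypothetical instance: two families, setup of 10, minimum batch size 2.
batches, makespan = schedule_sbatch(
    [("A", 3), ("B", 2), ("A", 4), ("B", 5)], setup_time=10, min_batch_size=2
)
```

With interleaved processing (A, B, A, B) this instance would pay three setups; batching pays only one, which is exactly the saving s-batching targets.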


Shylock: Causal Discovery in Multivariate Time Series based on Hybrid Constraints

Li, Shuo, Xu, Keqin, Liu, Jie, Ye, Dan

arXiv.org Artificial Intelligence

Abstract--Causal relationship discovery has been drawing increasing attention due to its prevalent applications. Existing methods rely on human experience, statistical methods, or graphical criteria, which are error-prone, depend on idealized assumptions, and require large amounts of data. There is also a serious data gap in accessing multivariate time series (MTS) in many areas, which adds difficulty to finding their causal relationships, and existing methods easily over-fit on such data. To fill this gap, we propose Shylock, a novel method that works well on both few-shot and normal MTS for finding causal relationships. Shylock reduces the number of parameters exponentially by using group dilated convolution and a shared kernel, while still learning a better representation of variables with time delay. By combining a global constraint and a local constraint, Shylock achieves information sharing among networks to help improve accuracy. To evaluate the performance of Shylock, we also design a data generation method that produces MTS with time delay. We evaluate Shylock on commonly used benchmarks and on the generated datasets. Extensive experiments show that it outperforms two existing state-of-the-art methods on both few-shot and normal MTS.
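One ingredient of the abstract, dilated convolution over a time series, can be sketched in a few lines of plain Python. This is only an illustration of the mechanism (not Shylock itself): a kernel of k weights with dilation d covers a time span of (k-1)*d+1 steps, so time-delayed dependencies are captured without adding parameters, and the same shared kernel can be reused across variable groups.

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D dilated convolution (cross-correlation form).

    A small kernel reused with dilation > 1 sees a wide time span,
    which is how time-delayed effects can be modeled cheaply.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field in time steps
    return [
        sum(kernel[i] * signal[t + i * dilation] for i in range(k))
        for t in range(len(signal) - span + 1)
    ]

# Kernel [1, 1] with dilation 2 sums each value with its lag-2 successor.
out = dilated_conv1d([1, 2, 3, 4, 5], [1, 1], dilation=2)
```

Sharing this one kernel across many variables is what keeps the parameter count small, at the cost of assuming the same temporal pattern applies to each group.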


Beyond Pairwise Connections: Extracting High-Order Functional Brain Network Structures under Global Constraints

Zhan, Ling, Huang, Junjie, Yu, Xiaoyao, Chen, Wenyu, Jia, Tao

arXiv.org Artificial Intelligence

Functional brain network (FBN) modeling often relies on local pairwise interactions, whose limitation in capturing high-order dependencies is theoretically analyzed in this paper. Meanwhile, the computational burden and heuristic nature of current hypergraph modeling approaches hinder end-to-end learning of FBN structures directly from data distributions. To address this, we propose to extract high-order FBN structures under global constraints, and implement this as a Global Constraints oriented Multi-resolution (GCM) FBN structure learning framework. It incorporates 4 types of global constraint (signal synchronization, subject identity, expected edge numbers, and data labels) to enable learning FBN structures for 4 distinct levels (sample/subject/group/project) of modeling resolution. Experimental results demonstrate that GCM achieves up to a 30.6% improvement in relative accuracy and a 96.3% reduction in computational time across 5 datasets and 2 task settings, compared to 9 baselines and 10 state-of-the-art methods. Extensive experiments validate the contributions of individual components and highlight the interpretability of GCM. This work offers a novel perspective on FBN structure learning and provides a foundation for interdisciplinary applications in cognitive neuroscience. Code is publicly available on https://github.com/lzhan94swu/GCM.


Gala: Global LLM Agents for Text-to-Model Translation

Cai, Junyang, Kadioglu, Serdar, Dilkina, Bistra

arXiv.org Artificial Intelligence

Natural language descriptions of optimization or satisfaction problems are challenging to translate into correct MiniZinc models, as this process demands both logical reasoning and constraint programming expertise. We introduce Gala, a framework that addresses this challenge with a global agentic approach: multiple specialized large language model (LLM) agents decompose the modeling task by global constraint type. Each agent is dedicated to detecting and generating code for a specific class of global constraint, while a final assembler agent integrates these constraint snippets into a complete MiniZinc model. By dividing the problem into smaller, well-defined sub-tasks, each LLM handles a simpler reasoning challenge, potentially reducing overall complexity. We conduct initial experiments with several LLMs and show better performance against baselines such as one-shot prompting and chain-of-thought prompting. Finally, we outline a comprehensive roadmap for future work, highlighting potential enhancements and directions for improvement.
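The decompose-then-assemble pipeline described above can be sketched without any LLM. In this hypothetical stand-in, each "agent" is a keyword detector for one global-constraint class that emits a MiniZinc snippet, and an assembler joins the snippets into one model; the real framework would replace the detectors with specialized LLM calls.

```python
# Hypothetical stand-ins for Gala's per-constraint LLM agents: each entry
# maps a global-constraint class to (detector, MiniZinc snippet). Keyword
# matching replaces the LLM reasoning purely for illustration.
AGENTS = {
    "alldifferent": (
        lambda text: "distinct" in text or "different" in text,
        "constraint alldifferent(x);",
    ),
    "cumulative": (
        lambda text: "capacity" in text or "resource" in text,
        "constraint cumulative(start, dur, req, cap);",
    ),
}

def assemble_model(description, n_vars):
    """Final 'assembler agent': integrate detected snippets into one model."""
    desc = description.lower()
    snippets = [code for detect, code in AGENTS.values() if detect(desc)]
    header = f"array[1..{n_vars}] of var 1..{n_vars}: x;"
    return "\n".join([header, *snippets, "solve satisfy;"])

model = assemble_model("Assign distinct values to all tasks", n_vars=3)
```

Each sub-task (detect one constraint class, emit one snippet) is far simpler than producing a whole model in one shot, which is the complexity-reduction argument the abstract makes.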


Overcoming Over-Fitting in Constraint Acquisition via Query-Driven Interactive Refinement

Balafas, Vasileios, Tsouros, Dimos, Ploskas, Nikolaos, Stergiou, Kostas

arXiv.org Artificial Intelligence

Manual modeling in Constraint Programming is a substantial bottleneck, which Constraint Acquisition (CA) aims to automate. However, passive CA methods are prone to over-fitting, often learning models that include spurious global constraints when trained on limited data, while purely active methods can be query-intensive. We introduce a hybrid CA framework specifically designed to address the challenge of over-fitting in CA. Our approach integrates passive learning for initial candidate generation, a query-driven interactive refinement phase that utilizes probabilistic confidence scores (initialized by machine learning priors) to systematically identify over-fitted constraints, and a specialized subset exploration mechanism to recover valid substructures from rejected candidates. A final active learning phase ensures model completeness. Extensive experiments on diverse benchmarks demonstrate that our interactive refinement phase is crucial for achieving high target model coverage and overall model accuracy from limited examples, doing so with manageable query complexity. This framework represents a substantial advancement towards robust and practical constraint acquisition in data-limited scenarios.


Virtual Arc Consistency for Linear Constraints in Cost Function Networks

Montalbano, Pierre, de Givry, Simon, Katsirelos, George

arXiv.org Artificial Intelligence

Abstract--In Constraint Programming, solving discrete minimization problems with hard and soft constraints can be done either using (i) soft global constraints, (ii) a reformulation into a linear program, or (iii) a reformulation into local cost functions. Approach (ii) provides a global view with strong bounds, but the size of the reformulation can be problematic. We focus on approach (iii), in which soft arc consistency (SAC) algorithms produce bounds of intermediate quality. Recently, the introduction of linear constraints as local cost functions has increased the modeling expressiveness of this approach. We adapt an existing SAC algorithm to handle linear constraints. We show that our algorithm significantly improves the lower bounds compared to the original algorithm on several benchmarks, reducing solving time in some cases. Graphical models provide a powerful framework for modeling a variety of combinatorial problems, addressing tasks that range from satisfaction problems to probabilistic models. They employ local functions defined over 'small' subsets of variables to represent diverse interactions among them. For example, to model the Constraint Satisfaction Problem (CSP) [2], each local function is a constraint evaluating to true (satisfied) or false (falsified).


Constraint Programming Models For Serial Batch Scheduling With Minimum Batch Size

Huertas, Jorge A., Van Hentenryck, Pascal

arXiv.org Artificial Intelligence

In serial batch (s-batch) scheduling, jobs are grouped in batches and processed sequentially within their batch. This paper considers multiple parallel machines, nonidentical job weights and release times, and sequence-dependent setup times between batches of different families. Although s-batching has been widely studied in the literature, very few papers have taken into account a minimum batch size, typical in practical settings such as semiconductor manufacturing and the metal industry. The problem with this minimum batch size requirement has been mostly tackled with dynamic programming and meta-heuristics, and no article has ever used constraint programming (CP) to do so. This paper fills this gap by proposing three CP models for s-batching with minimum batch size, including (i) an Interval Assignment model that computes and bounds the size of the batches using the presence literals of interval variables of the jobs. The computational experiments on standard cases compare the three CP models with two existing mixed-integer programming (MIP) models from the literature. The results demonstrate the versatility of the proposed CP models in handling multiple variations of s-batching, and their ability to produce better solutions faster than the MIP models on large instances. Introduction In the current and highly competitive landscape of the manufacturing industry, companies are under growing pressure to minimize production costs and reduce cycle times. One effective strategy to improve efficiency is to process similar tasks, called jobs, together in groups known as batches [1]. There are two main ways to process these batches. In parallel batching (p-batch), all jobs in a batch are processed simultaneously [2]. In contrast, in serial batching (s-batch), jobs in a batch are processed sequentially one after another [3]. The benefits of p-batching are obvious since it saves time by processing multiple jobs at once. 
Similarly, s-batching is especially useful when grouping similar jobs can prevent repetitive machine setups, which are time-consuming and costly [4]. Serial batching appears in many industries, including metal processing [5], additive manufacturing (3D printing) [5, 6], paint [7] and pharmaceutical production [8], chemical manufacturing [9], and semiconductor manufacturing [10, 11].


Modeling Deontic Modal Logic in the s(CASP) Goal-directed Predicate Answer Set Programming System

Gupta, Gopal, Rajasekharan, Abhiramon, Tudor, Alexis R., Salazar, Elmer, Arias, Joaquín

arXiv.org Artificial Intelligence

We consider the problem of implementing deontic modal logic. We show how (deontic) modal operators can be expressed elegantly using default negation (negation-as-failure) and strong negation present in answer set programming (ASP). We propose using global constraints of ASP to represent obligations and impermissibilities of deontic modal logic. We show that our proposed representation results in the various paradoxes of deontic modal logic being elegantly resolved.


ByteSpan: Information-Driven Subword Tokenisation

Goriely, Zébulon, Salhan, Suchir, Lesci, Pietro, Cheng, Julius, Buttery, Paula

arXiv.org Artificial Intelligence

Recent dynamic tokenisation methods operate directly on bytes and pool their latent representations into patches. This bears similarities to computational models of word segmentation that determine lexical boundaries using spikes in an autoregressive model's prediction error. Inspired by this connection, we explore whether grouping predictable bytes - rather than pooling their representations - can yield a useful fixed subword vocabulary. We propose a new information-driven subword tokeniser, ByteSpan, that uses an external byte-level LM during training to identify contiguous predictable byte sequences and group them into subwords. Experiments show that ByteSpan yields efficient vocabularies with higher morphological alignment scores than BPE for English. Multilingual experiments show similar compression and Rényi efficiency for 25 languages.
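The core grouping idea, opening a new subword whenever the byte-level LM's prediction error spikes, can be sketched directly. This is a simplified illustration under stated assumptions: the external LM and its per-byte surprisal values are taken as given, and a fixed threshold stands in for whatever boundary criterion the tokeniser actually uses.

```python
def segment_by_surprisal(byte_ids, surprisals, threshold):
    """Group contiguous predictable bytes into subwords.

    byte_ids and surprisals are aligned per byte; a surprisal above
    threshold marks an unpredictable byte, i.e. a lexical boundary.
    """
    subwords, current = [], [byte_ids[0]]
    for b, s in zip(byte_ids[1:], surprisals[1:]):
        if s > threshold:      # prediction-error spike: start a new subword
            subwords.append(current)
            current = [b]
        else:                  # predictable byte: extend the current subword
            current.append(b)
    subwords.append(current)
    return subwords

# Hypothetical surprisal values: a spike at the fourth byte splits "theca".
subwords = segment_by_surprisal(list(b"theca"), [3.0, 0.2, 0.1, 2.5, 0.3], 1.0)
```

During training, the byte groups produced this way would be collected into a fixed subword vocabulary; at inference no LM is needed, which distinguishes this from dynamic byte-pooling approaches.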